Knowledge graph (KG) link prediction aims to infer new facts from the facts already present in the KG. Recent studies have shown that exploiting a node's graph neighborhood via graph neural networks (GNNs) provides more useful information than the query alone. Conventional GNNs for KG link prediction follow the standard message-passing paradigm over the entire KG, which leads to over-smoothed representations and limits scalability; at scale, aggregating useful information from the entire KG for inference becomes computationally expensive. To address these limitations of existing KG link prediction frameworks, we propose a novel retrieve-and-read framework, which first retrieves a relevant subgraph context for the query and then jointly reasons over the context and the query with a high-capacity reader. As an exemplar instantiation of the new framework, we propose a novel Transformer-based GNN as the reader, which incorporates a graph-based attention structure and cross-attention between query and context for deep fusion. This design enables the model to focus on the context information most salient to the query. Empirical results on two standard KG link prediction datasets demonstrate the competitive performance of the proposed method.
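To make the retrieve-and-read pipeline concrete, the following is a minimal PyTorch sketch of the two stages: a retriever that collects a subgraph around the query head entity, and a reader that fuses the (head, relation) query with the retrieved context via cross-attention before scoring candidate tails. The BFS retrieval heuristic, the single attention layer, and all names and dimensions here are illustrative assumptions, not the paper's actual model.

```python
# A minimal sketch of the retrieve-and-read idea; not the authors' code.
# The BFS retriever and single cross-attention reader are simplifying
# assumptions standing in for the paper's retriever and Transformer-based GNN.
import torch
import torch.nn as nn


def retrieve_subgraph(edge_index, head, num_hops=2):
    """Collect nodes within `num_hops` of the query head entity (simple BFS)."""
    nodes = {head}
    for _ in range(num_hops):
        frontier = set()
        for src, dst in edge_index:
            if src in nodes:
                frontier.add(dst)
            if dst in nodes:
                frontier.add(src)
        nodes |= frontier
    return sorted(nodes)


class Reader(nn.Module):
    """Toy reader: cross-attention from the (head, relation) query to context nodes."""

    def __init__(self, num_entities, num_relations, dim=64, heads=4):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(dim, dim)

    def forward(self, head, relation, context_nodes):
        # The query token fuses head entity and relation; the context is
        # the retrieved subgraph rather than the entire KG.
        q = (self.ent(head) + self.rel(relation)).unsqueeze(0).unsqueeze(0)  # (1, 1, d)
        ctx = self.ent(context_nodes).unsqueeze(0)                           # (1, n, d)
        fused, _ = self.cross_attn(q, ctx, ctx)                              # (1, 1, d)
        # Score every entity as a candidate tail via a dot product.
        return self.score(fused).squeeze() @ self.ent.weight.T               # (num_entities,)


# Usage on a toy KG with 6 entities and 2 relation types.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
ctx = torch.tensor(retrieve_subgraph(edges, head=0, num_hops=2))  # -> [0, 1, 2]
reader = Reader(num_entities=6, num_relations=2)
scores = reader(torch.tensor(0), torch.tensor(1), ctx)
print(scores.shape)  # torch.Size([6])
```

The key design point the sketch captures is that the reader only ever attends over the retrieved context, so inference cost scales with the subgraph size rather than the full KG.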
Recent advances in deep learning have greatly propelled research in semantic parsing, which has in turn driven improvements in many downstream tasks, including natural language interfaces to Web APIs, text-to-SQL generation, and more. However, despite its close connection to these tasks, research on question answering over knowledge bases (KBQA) has progressed comparatively slowly. We identify and attribute this to two unique challenges of KBQA: schema-level complexity and fact-level complexity. In this survey, we situate KBQA in the broader literature of semantic parsing and give a comprehensive account of how existing KBQA approaches attempt to address these unique challenges. Unique challenges notwithstanding, we argue that KBQA can still draw much inspiration from the semantic parsing literature, something existing KBQA research has largely overlooked. Based on our discussion, we can better understand the bottlenecks of current KBQA research and shed light on promising directions for KBQA to keep pace with the semantic parsing literature, particularly in the era of pre-trained language models.